    "On Hochberg et al.'s, the tragedy of the reviewers commons"

    We discuss each of the recommendations made by Hochberg et al. (2009) to prevent the “tragedy of the reviewer commons”. Having scientific journals share a common database of reviewers would recreate a bureaucratic organization in which extra-scientific considerations prevail. Pre-reviewing of papers by colleagues is a widespread practice but raises problems of coordination. Revising manuscripts in line with all reviewers’ recommendations presupposes that those recommendations converge, which is seldom the case. Requiring authors to sign an undertaking that they have taken all reviewers’ comments into account is both authoritarian and sterilizing. Sending previous reviews along with subsequent submissions to other journals amounts to creating a cartel and a single all-encompassing journal, which again is sterilizing. Using young scientists as reviewers is highly risky: they might prove very severe, and if they have not yet published themselves, the recommendation violates the principle of peer review. Asking reviewers to be more severe would only create a crisis in the publishing houses and actually increase reviewers’ workloads. The criticisms of authors who seek to publish in the best journals are unfair: it is natural for scholars to aim for the best journals rather than resign themselves to being second rate. Punishing lazy reviewers would only lower the quality of reports; instead, we favor the idea of paying reviewers “in kind” with, say, complimentary books or papers.
    Keywords: Reviewer; Referee; Editor; Publisher; Publishing; Tragedy of the Commons; Hochberg

    Biproportional Techniques in Input-Output Analysis: Table Updating and Structural Analysis

    This paper is dedicated to the contributions of Sir Richard Stone, Michael Bacharach, and Philip Israilevich. It starts out with a brief history of biproportional techniques and related matrix-balancing algorithms. We then discuss the RAS algorithm developed by Sir Richard Stone and others, and evaluate the interpretability of the product of the adjustment parameters, generally known as R and S. We then move on to the various formal formulations of other biproportional approaches and discuss what defines an algorithm as “biproportional”. After mentioning a number of competing optimization algorithms that cannot fall under the rubric of biproportional, we reflect upon how some of their features have been incorporated into the biproportional setting (the ability to fix the value of interior cells of the matrix being adjusted and to incorporate data reliability into the algorithm). We wind up the paper by pointing out some areas that could use further investigation.
    Keywords: Input-Output Economics; RAS; data raking; iterative proportional fitting; estimating missing data
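
    A minimal sketch of the RAS iteration discussed above, assuming a nonnegative seed matrix with no zero rows or columns and consistent target margins (all names are illustrative, not from the paper):

        import numpy as np

        def ras(Z, u, v, tol=1e-9, max_iter=1000):
            # Biproportional (RAS) balancing: find r and s such that
            # diag(r) @ Z @ diag(s) has row sums u and column sums v.
            # Requires nonnegative Z and sum(u) == sum(v). Note that only
            # the products r_i * s_j are pinned down, not r and s separately.
            Z = np.asarray(Z, dtype=float)
            u = np.asarray(u, dtype=float)
            v = np.asarray(v, dtype=float)
            r = np.ones(Z.shape[0])
            s = np.ones(Z.shape[1])
            for _ in range(max_iter):
                r = u / (Z @ s)             # row-scaling step (the "R")
                s = v / (Z.T @ r)           # column-scaling step (the "S")
                X = r[:, None] * Z * s      # current adjusted matrix
                if np.allclose(X.sum(axis=1), u, atol=tol) and \
                   np.allclose(X.sum(axis=0), v, atol=tol):
                    break
            return X, r, s

        # Example: rebalance a 2x2 flow matrix to new margins.
        X, r, s = ras([[10, 5], [3, 7]], u=[18, 12], v=[14, 16])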

    La maximisation du taux de profit [The maximisation of the profit rate]

    In traditional microeconomic theory, firms are supposed to maximise pure profit. We study what happens when shareholders, and the financial profit remunerating financial capital, are taken into consideration. We show that financial profit maximisation must be abandoned in favour of maximisation of the rate of financial profit. The cases of competition with a fixed capital coefficient, monopoly with a fixed capital coefficient, and monopoly with a variable capital coefficient are studied, and the role of profitability constraints is treated. The solutions given by profit maximisation and by rate-of-profit maximisation are compared. We conclude that the volume of investment is reduced in some situations, that markets do not clear automatically, and that some conclusions of industrial, normative, and welfare economics need to be revised.
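
    In symbols, the switch of objective described above, assuming revenue R(q), cost C(q), and financial capital K(q) (notation added here, not in the abstract):

        \max_q \; \Pi(q) = R(q) - C(q)
        \qquad \text{becomes} \qquad
        \max_q \; \rho(q) = \frac{R(q) - C(q)}{K(q)}

    The first-order condition \rho'(q) = 0 gives \Pi'(q) K(q) = \Pi(q) K'(q): marginal profit is still positive at the optimum whenever \Pi(q) > 0 and K'(q) > 0, so (for concave \Pi) output and investment stop short of the pure-profit optimum \Pi'(q) = 0, consistent with the reduced investment noted above.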

    On the Convergence of the Generalized Ibn Ezra Value

    Ibn Ezra (Sefer ha-Mispar [The Book of the Number, in Hebrew], Verona, 1146; German trans.: Silberberg M., Kauffmann, Frankfurt am Main, 1895), Rabinovitch (Probability and Statistical Inference in Medieval Jewish Literature, University of Toronto Press, Toronto, 1973) and O’Neill (Math Soc Sci 2(4):345–371, 1982) proposed a method for solving the “rights arbitration problem” (one of the historical “bankruptcy” problems) for n claimants when the estate E is equal to the largest claim. However, when the greatest claim is less than the estate, the question of what to do with the difference between E and the largest claim arises. Alcalde et al.’s (Econ Theory 26(1):103–114, 2005) Generalized Ibn Ezra Value (GiEV) solves the problem in T iterations of n steps. Using Monte Carlo experiments, we show that: (i) T grows linearly with the number of claimants, which makes GiEV rapidly impracticable for real applications; (ii) the closer E is to the total claim d, the larger T becomes: T grows linearly as E approaches d exponentially (each tenfold reduction of the gap between E and d adds roughly the same number of iterations). Moreover, we prove theoretically that GiEV fails to provide a solution in a finite number of iterations for the trivial case E = d, whereas it should obviously find a solution in one iteration. So, even if GiEV is convergent, the total claim d appears as an asymptote: the number of iterations tends to infinity as the estate E approaches d. We conclude that GiEV is inefficient and usable only when (1) the number of claimants is low, and (2) the estate E is much lower than the total claim d.
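
    For context, a minimal sketch of the classical Ibn Ezra division for the base case the paper starts from, where the estate E equals the largest claim (the function name and layout are illustrative; this is the rule GiEV generalizes, not GiEV itself):

        def ibn_ezra(claims):
            # Each increment (d[j-1], d[j]] of the sorted claims is contested
            # by every claimant whose claim reaches it, and is split equally
            # among them; summing the shares gives each claimant's award.
            n = len(claims)
            order = sorted(range(n), key=lambda i: claims[i])
            awards = [0.0] * n
            prev = 0.0
            for j, i in enumerate(order):
                share = (claims[i] - prev) / (n - j)  # n - j contestants
                for k in order[j:]:
                    awards[k] += share
                prev = claims[i]
            return awards

        # Example: claims 30, 60, 90 with estate E = 90 -> awards 10, 25, 55.
        print(ibn_ezra([30.0, 60.0, 90.0]))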

    Pollution models and inverse distance weighting: some critical remarks

    When evaluating the impact of pollution, measurements from remote stations are often weighted by the inverse of distance raised to some nonnegative power (IDW). This is derived from Shepard's method of spatial interpolation (1968). The paper discusses the arbitrary character of the exponent of distance and the problem of monitoring stations that are close to the reference point. From elementary laws of physics, we determine which exponent of distance (or which upper bound on it) should be chosen depending on the form of pollution encountered: radiant pollution (including radioactivity and sound), air pollution (plumes, puffs, and motionless clouds, using the classical Gaussian model), and polluted rivers. The case where a station coincides with the reference point (zero distance) is also discussed: in real cases this station imposes its measurement on the whole area, regardless of the measurements made by other stations. This is a serious flaw when evaluating the mean pollution of an area. However, it is shown that this does not occur with a continuum of monitoring stations, where the measurement at the reference point and that for the whole area may differ, which is satisfactory
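
    A minimal sketch of the Shepard-style IDW estimator under discussion, illustrating the zero-distance behaviour criticized in the paper (names and the default exponent are illustrative):

        import numpy as np

        def idw(x0, stations, values, p=2.0):
            # Inverse-distance weighting at point x0; p is the arbitrary
            # distance exponent whose choice the paper discusses.
            x0 = np.asarray(x0, dtype=float)
            stations = np.asarray(stations, dtype=float)
            values = np.asarray(values, dtype=float)
            d = np.linalg.norm(stations - x0, axis=1)
            if np.any(d == 0.0):
                # A station located at the reference point gets infinite
                # weight and imposes its measurement, the flaw noted above.
                return float(values[d == 0.0][0])
            w = 1.0 / d**p
            return float(np.sum(w * values) / np.sum(w))

        # Example: estimate pollution at the origin from three stations.
        print(idw([0.0, 0.0], [[1.0, 0.0], [0.0, 2.0], [3.0, 3.0]],
                  [5.0, 8.0, 2.0]))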

    About the reinterpretation of the Ghosh model as a price model

    The Ghosh model assumes that, in an input-output framework, each commodity is sold to each sector in fixed proportions. This model is strongly criticized because it seems implausible in the traditional input-output field. To answer these criticisms, Dietzenbacher stresses that it can be reinterpreted as a price model: the Leontief price model is equivalent to the Ghosh model when the latter is interpreted as a price model. This paper shows that the interpretation of the Ghosh model as a price model cannot be accepted, because Dietzenbacher makes a strong assumption, dichotomy, while the Ghosh model does not determine prices
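
    For reference, the two models in standard input-output notation, a sketch assuming the usual definitions (Z the transactions matrix, x gross output, v the value-added vector), none of which are spelled out in the abstract:

        \text{Ghosh, with allocation coefficients } B = \hat{x}^{-1} Z :
        \quad x' = v' (I - B)^{-1}

        \text{Leontief price model, with } A = Z \hat{x}^{-1}
        \text{ and unit value-added } v_c :
        \quad p' = v_c' (I - A)^{-1}

    Dietzenbacher's reinterpretation reads the Ghosh equation as a price system; the abstract argues this reading requires a dichotomy assumption that the Ghosh model itself does not supply.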

    Failure of the normalization of the RAS method: absorption and fabrication effects are still incorrect

    The r and s vectors of the RAS method of updating matrices are often presented as corresponding to an absorption effect and a fabrication effect. Here, it is proved that these vectors are not identified, so their interpretation in terms of absorption and fabrication effects is incorrect; and even though a normalization has been proposed to remove the underidentification, this normalization fails and poses many difficulties
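
    The non-identification can be stated in one line, in the usual RAS notation (assumed here, not given in the abstract): if the updated matrix is Z^1 = \hat{r} Z^0 \hat{s}, then for every scalar \lambda > 0

        \hat{r} \, Z^0 \, \hat{s} = (\lambda \hat{r}) \, Z^0 \, (\lambda^{-1} \hat{s}),

    so only the products r_i s_j are identified, and any reading of r as an absorption effect and s as a fabrication effect rests on an arbitrary normalization.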

    Industrial organization with profit rate maximizing firms

    We study the impact on industrial organization of switching the objective function from pure profit to profit-rate maximization. The optimal output level of the firm is lower, which leads to a new conception of efficiency. Cases of no coordination are considered. In perfect competition, the price signal disappears; factors remain paid at their marginal productivity, but in a modified form. In imperfect competition, reaction functions may vanish even though collusion remains possible; the limit of oligopoly remains perfect competition among profit-rate maximizers; the Bertrand paradox may remain; and a new concept is studied: the mixed duopoly, in which firms can choose and change their objective
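
    A minimal numerical sketch of the first claim above (lower optimal output under profit-rate maximization), with assumed functional forms and numbers that are illustrative only, not taken from the paper:

        import numpy as np

        # Assumed: linear demand p = a - b*q, constant marginal cost c,
        # affine capital requirement K(q) = k0 + k1*q.
        a, b, c = 100.0, 1.0, 20.0
        k0, k1 = 100.0, 2.0

        q = np.linspace(0.01, 80.0, 200001)
        profit = (a - b * q - c) * q          # pure profit
        rate = profit / (k0 + k1 * q)         # profit rate on capital

        print(q[np.argmax(profit)])  # ~40.0 = (a - c) / (2b)
        print(q[np.argmax(rate)])    # ~30.6, lower whenever k1 > 0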